17,077 research outputs found

    Adapting robot task planning to user preferences: an assistive shoe dressing example

    Healthcare robots will be the next big advance in domestic welfare, with robots able to assist elderly people and users with disabilities. However, each user has their own preferences, needs and abilities, so robotic assistants will need to adapt to them and behave accordingly. Towards this goal, we propose a method to adapt robot behavior to user preferences using symbolic task planning. A user model is built from the user's answers to simple questions by means of a fuzzy inference system, and it is then integrated into the planning domain. We describe an adaptation method based on both user satisfaction and the execution outcome, which together determine the penalizations applied to the planner's rules. We demonstrate the adaptation method in a simple shoe-fitting scenario, with experiments performed in a simulated user environment. The results show quick behavior adaptation, even when the user's behavior changes, as well as robustness to wrong inference of the initial user model. Finally, some insights from a real-world shoe-fitting setup are also provided.
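    A minimal sketch of the penalty-based adaptation idea described above, assuming a toy set of planner actions: user satisfaction (in [0, 1]) and execution success jointly raise the cost of an action, so the planner ends up preferring actions the user likes and that tend to succeed. Action names, weights and the update rule are illustrative assumptions, not the authors' actual planning-domain encoding.

```python
# Illustrative sketch only: hypothetical actions and penalty update, not the
# paper's actual symbolic planning domain.
ACTIONS = ["approach_fast", "approach_slow", "insert_shoe_front", "retry_insert"]

# Extra cost added to each planner rule; lower total cost => action preferred.
penalties = {a: 0.0 for a in ACTIONS}

def update_penalty(action, satisfaction, success,
                   w_satisfaction=0.5, w_outcome=1.0, decay=0.9):
    """Penalize actions the user dislikes (low satisfaction) or that fail."""
    penalties[action] = decay * penalties[action]           # forget old evidence slowly
    penalties[action] += w_satisfaction * (1.0 - satisfaction)
    penalties[action] += w_outcome * (0.0 if success else 1.0)

# Toy usage: the fast approach annoyed the user and failed once.
update_penalty("approach_fast", satisfaction=0.2, success=False)
update_penalty("approach_slow", satisfaction=0.9, success=True)
print(sorted(penalties.items(), key=lambda kv: kv[1]))      # cheapest actions first
```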

    Personalization framework for adaptive robotic feeding assistance

    The deployment of robots at home requires robots with pre-defined skills whose behavior can be personalized by non-expert users. A framework to tackle this personalization is presented and applied to an automatic feeding task. The personalization involves the caregiver providing several examples of feeding via Learning by Demonstration, and a ProMP formalism is used to compute an overall trajectory and the variance along the path. Experiments show the validity of the approach in generating different feeding motions adapted to users' preferences, automatically extracting the relevant task parameters. The importance of the nature of the demonstrations is also assessed, and two training strategies are compared.
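    As a rough illustration of the ProMP idea mentioned above, the sketch below (Python with NumPy; the basis count, basis width and toy demonstrations are assumptions) fits radial-basis-function weights to each demonstration and uses the mean and covariance of those weights to obtain an overall trajectory and its variance along the path. The authors' actual formulation and conditioning steps are richer than this.

```python
import numpy as np

def rbf_features(T=100, n_basis=15, width=0.02):
    t = np.linspace(0, 1, T)[:, None]
    c = np.linspace(0, 1, n_basis)[None, :]
    phi = np.exp(-(t - c) ** 2 / (2 * width))
    return phi / phi.sum(axis=1, keepdims=True)             # (T, n_basis), rows sum to 1

def fit_promp(demos, n_basis=15):
    """demos: list of 1-D arrays of equal length T (already time-aligned)."""
    Phi = rbf_features(len(demos[0]), n_basis)
    W = np.stack([np.linalg.lstsq(Phi, d, rcond=None)[0] for d in demos])
    mu_w, Sigma_w = W.mean(axis=0), np.cov(W.T)
    mean_traj = Phi @ mu_w                                   # overall trajectory
    var_traj = np.einsum('tb,bc,tc->t', Phi, Sigma_w, Phi)   # variance along the path
    return mean_traj, var_traj

# Toy usage: six noisy "spoon-to-mouth"-like demonstrations of a single joint.
T = 100
rng = np.random.default_rng(1)
demos = [np.sin(np.linspace(0, np.pi, T)) + 0.05 * rng.standard_normal(T) for _ in range(6)]
mean_traj, var_traj = fit_promp(demos)
```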

    Assistive robotics: research challenges and ethics education initiatives

    Assistive robotics is a fast-growing field aimed at helping healthcare workers in hospitals, rehabilitation centers and nursing homes, as well as empowering people with reduced mobility at home so that they can autonomously carry out their activities of daily living. The need to function in dynamic, human-centered environments poses new research challenges: robotic assistants need friendly interfaces and must be highly adaptable and customizable, very compliant and intrinsically safe for people, as well as able to handle deformable materials. Besides technical challenges, assistive robotics also raises ethical ones, which have led to the emergence of a new discipline: Roboethics. Several institutions are developing regulations and standards, and many ethics education initiatives include content on human-robot interaction and human dignity in assistive situations. In this paper, the state of the art in assistive robotics is briefly reviewed, and educational materials from a university course on Ethics in Social Robotics and AI focusing on the assistive context are presented.

    Towards safety in physically assistive robots: eating assistance

    Safety is one of the basic elements for building trust in robots. This paper studies remedies to unavoidable collisions, using robotic assistive feeding as an example task. Firstly, we propose an attention mechanism so the user can control the robot using gestures and thus prevent collisions. Secondly, when unwanted contacts cannot be avoided, we compare two safety strategies: active safety, using a force sensor to monitor maximum allowed forces, and passive safety, using compliant controllers. Experimental evaluation shows that the gesture mechanism is effective for controlling the robot. Moreover, the impact forces obtained with both methods are similar, so either can be used independently. Additionally, users who deliberately experienced impacts reported that the impacts were not harmful.
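    The "active safety" strategy above boils down to monitoring contact forces against a maximum allowed value. The sketch below is a hedged illustration of that loop; the force threshold, the simulated readings and the stop routine are hypothetical placeholders, not the hardware interfaces or limits used in the paper.

```python
# Hypothetical placeholders throughout: the threshold, readings and stop routine
# stand in for a real force/torque sensor driver and the robot's stop interface.
import numpy as np

MAX_FORCE_N = 15.0            # assumed limit, not the value used in the paper

def stop_robot():
    print("Contact force above the allowed maximum: stopping motion.")

def force_ok(force, max_force=MAX_FORCE_N):
    """Return False (and trigger a stop) if the measured force exceeds the limit."""
    if np.linalg.norm(force) > max_force:
        stop_robot()
        return False
    return True

# Toy usage: simulated force readings with a spike at the last waypoint.
readings = [[1.0, 0.5, 2.0], [2.0, 1.0, 3.0], [12.0, 9.0, 8.0]]
for f in readings:
    if not force_ok(np.array(f)):
        break                 # abort the feeding motion on excessive contact force
```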

    Human-in-the-Loop Model Predictive Control of an Irrigation Canal

    Until now, advanced model-based control techniques have predominantly been applied to control problems that are relatively straightforward to model. Many systems with complex dynamics or containing sophisticated sensing and actuation elements can be controlled if the corresponding mathematical models are available, even if there is uncertainty in this information. Consequently, the application of model-based control strategies has flourished in numerous areas, including industrial applications [1]-[3].

    A real-time human-robot interaction system based on gestures for assistive scenarios

    Natural and intuitive human interaction with robotic systems is a key point in developing robots that assist people in an easy and effective way. In this paper, a Human-Robot Interaction (HRI) system able to recognize gestures commonly used in human non-verbal communication is introduced, and an in-depth study of its usability is performed. The system deals with dynamic gestures, such as waving or nodding, which are recognized using a Dynamic Time Warping approach based on gesture-specific features computed from depth maps. A static gesture, pointing at an object, is also recognized. The pointed location is then estimated in order to detect candidate objects the user may be referring to. When the pointed object is unclear to the robot, a disambiguation procedure is carried out by means of either a verbal or a gestural dialogue. This skill allows the robot to pick up an object on behalf of a user who may have difficulty doing so themselves. The overall system, composed of a NAO robot, a Wifibot mobile platform, a Kinect v2 sensor and two laptops, is first evaluated in a structured lab setup. Then, a broad set of user tests is carried out to assess correct performance in terms of recognition rates, ease of use and response times.
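    Dynamic Time Warping, named above as the recognition method for dynamic gestures, can be sketched compactly. The example below classifies a toy 1-D feature sequence against stored gesture templates by nearest DTW distance; the actual system works on gesture-specific features computed from depth maps, and the template sequences here are made up for illustration.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance for 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(sequence, templates):
    """Label of the closest stored gesture template under DTW."""
    return min(templates, key=lambda label: dtw_distance(sequence, templates[label]))

# Toy templates: made-up 1-D signals standing in for depth-map features.
templates = {"wave": np.sin(np.linspace(0, 4 * np.pi, 40)),
             "nod":  np.abs(np.sin(np.linspace(0, 2 * np.pi, 30)))}
rng = np.random.default_rng(2)
observed = np.sin(np.linspace(0, 4 * np.pi, 55)) + 0.1 * rng.standard_normal(55)
print(classify(observed, templates))   # typically prints "wave"
```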

    Tactile Mapping and Localization from High-Resolution Tactile Imprints

    This work studies the problem of shape reconstruction and object localization using a vision-based tactile sensor, GelSlim. The main contributions are the recovery of local shapes from contact, an approach to reconstruct the tactile shape of objects from tactile imprints, and an accurate method for localizing previously reconstructed objects. The algorithms can be applied to a large variety of 3D objects and provide accurate tactile feedback for in-hand manipulation. Results show that by exploiting the dense tactile information we can reconstruct the shape of objects with high accuracy and perform online object identification and localization, opening the door to reactive manipulation guided by tactile sensing. Videos and supplemental information are available on the project website: http://web.mit.edu/mcube/research/tactile_localization.html
    Comment: ICRA 2019, 7 pages, 7 figures. Website: http://web.mit.edu/mcube/research/tactile_localization.html Video: https://youtu.be/uMkspjmDbq
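    One generic building block behind localizing a previously reconstructed object from a tactile imprint is rigid registration of the contact points against the stored model. The sketch below uses the Kabsch algorithm with known correspondences and synthetic data; it only illustrates the idea and is not the paper's pipeline, which handles correspondence search and much denser tactile data.

```python
import numpy as np

def kabsch(P, Q):
    """Rigid transform (R, t) minimizing ||R @ p_i + t - q_i|| over corresponding rows."""
    p0, q0 = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p0).T @ (Q - q0)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q0 - R @ p0
    return R, t

# Toy usage: a "tactile patch" is the model patch moved by a known rotation/translation.
rng = np.random.default_rng(0)
model_patch = rng.random((50, 3))
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.05, -0.02, 0.01])
tactile_points = model_patch @ R_true.T + t_true
R_est, t_est = kabsch(model_patch, tactile_points)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))   # True True
```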